Charisma is considered to be one's ability to attract and potentially also influence others. Clearly, there is considerable interest in equipping artificial intelligence (AI) with such a skill. Beyond that, a plethora of use cases opens up for the computational measurement of human charisma, such as tutoring humans in the acquisition of charisma, mediating human-to-human conversation, or identifying charismatic individuals in big social data. A number of models base charisma on various dimensions, often following the idea that charisma is present if someone could and would help others. Examples include influence (could help) and affability (would help) in scientific studies, or power (could help), presence, and warmth (both would help) as a popular concept. Modelling high levels of these dimensions for humanoid robots or virtual agents seems accomplishable. Likewise, automatic measurement appears quite feasible given recent advances in the related fields of Affective Computing and Social Signal Processing. Here, we therefore present a blueprint for building machines that can both appear charismatic and analyse the charisma of others. To this end, we first provide the psychological perspective, including different models of charisma and its behavioural cues. We then turn to conversational charisma in spoken language as an exemplary modality that is essential for human-human and human-computer conversations. The computational perspective then deals with the recognition and generation of charismatic behaviour by AI. This includes an overview of the state of play in the field and the aforementioned blueprint. We then name exemplary use cases of computational charismatic skills before turning to ethical aspects and concluding this overview and perspective on building charisma-enabled AI.
We address the challenge of building domain-specific knowledge models for industrial use cases, where labelled data and taxonomic information are initially scarce. Our focus is on inductive link prediction models as a basis for practical tools that support knowledge engineers with exploring text collections and discovering and linking new (so-called open-world) entities to the knowledge graph. We argue that - though neural approaches to text mining have yielded impressive results in recent years - current benchmarks do not properly reflect the typical challenges encountered in the industrial wild. Therefore, our first contribution is an open benchmark coined IRT2 (inductive reasoning with text) that (1) covers knowledge graphs of varying sizes (including very small ones), (2) comes with incidental, low-quality text mentions, and (3) includes not only triple completion but also ranking, which is relevant for supporting experts with discovery tasks. We investigate two neural models for inductive link prediction, one based on end-to-end learning and one that learns from the knowledge graph and text data in separate steps. These models compete with a strong bag-of-words baseline. The results show a significant advance in performance for the neural approaches as soon as the available graph data decreases for linking. For ranking, the results are promising, and the neural approaches outperform the sparse retriever by a wide margin.
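As an aside (not taken from the paper), a bag-of-words baseline of the kind mentioned above can be sketched in a few lines: rank known knowledge-graph entities by the textual similarity of their descriptions to an open-world mention. The entity names and texts below are invented for illustration.

```python
import math
from collections import Counter

def bow_vector(text):
    """Bag-of-words term-frequency vector for a text mention."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_entities(query_mention, kg_entity_texts):
    """Rank known KG entities by textual similarity to an open-world mention."""
    q = bow_vector(query_mention)
    scores = {e: cosine(q, bow_vector(t)) for e, t in kg_entity_texts.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

Such a sparse retriever is trivial to implement and surprisingly hard to beat, which is why it serves as a baseline for the neural models.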
Accurate and robust extrinsic calibration is necessary for deploying autonomous systems that need multiple sensors for perception. In this paper, we present a robust system for real-time extrinsic calibration of multiple lidars in the vehicle base frame without the need for any fiducial markers or features. We base our approach on matching absolute GNSS and estimated lidar poses in real time. Comparing rotation components makes the solution more robust than the traditional least-squares approach, which compares translation components only. Additionally, instead of comparing all corresponding poses, we select the poses carrying maximum mutual information based on our novel observability criteria. This allows us to identify a subset of the poses helpful for real-time calibration. We also provide stopping criteria for ensuring calibration completion. To validate our approach, extensive tests were carried out on data collected using Scania test vehicles (7 sequences for a total of ~6.5 km). The results presented in this paper show that our approach is able to accurately determine the extrinsic calibration for various combinations of sensor setups.
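As background (this is a generic sketch, not the paper's algorithm), the core of rotation-based extrinsic calibration is estimating a fixed rotation between two pose streams; the classical closed-form tool for this is the SVD solution to Wahba's problem (the Kabsch algorithm):

```python
import numpy as np

def estimate_rotation(vecs_a, vecs_b):
    """Solve Wahba's problem: find the rotation R minimizing
    sum_i ||b_i - R a_i||^2 via SVD (Kabsch algorithm)."""
    A = np.asarray(vecs_a)  # N x 3 reference directions in frame A
    B = np.asarray(vecs_b)  # N x 3 same directions observed in frame B
    H = A.T @ B             # 3 x 3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```

Given matched GNSS and lidar orientations, a solver of this kind recovers the extrinsic rotation in closed form, without iterative least squares over translations.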
Machine learning has emerged recently as a powerful tool for predicting properties of quantum many-body systems. For many ground states of gapped Hamiltonians, generative models can learn from measurements of a single quantum state to reconstruct the state accurately enough to predict local observables. Alternatively, kernel methods can predict local observables by learning from measurements on different but related states. In this work, we combine the benefits of both approaches and propose the use of conditional generative models to simultaneously represent a family of states, by learning shared structures of different quantum states from measurements. The trained model allows us to predict arbitrary local properties of ground states, even for states not present in the training data, and without necessitating further training for new observables. We numerically validate our approach (with simulations of up to 45 qubits) for two quantum many-body problems, 2D random Heisenberg models and Rydberg atom systems.
Visual Inertial Odometry (VIO) is one of the most established state estimation methods for mobile platforms. However, when visual tracking fails, VIO algorithms quickly diverge due to rapid error accumulation during inertial data integration. This error is typically modeled as a combination of additive Gaussian noise and a slowly changing bias which evolves as a random walk. In this work, we propose to train a neural network to learn the true bias evolution. We implement and compare two common sequential deep learning architectures: LSTMs and Transformers. Our approach follows from recent learning-based inertial estimators, but, instead of learning a motion model, we target IMU bias explicitly, which allows us to generalize to locomotion patterns unseen in training. We show that our proposed method improves state estimation in visually challenging situations across a wide range of motions by quadrupedal robots, walking humans, and drones. Our experiments show an average 15% reduction in drift rate, with much larger reductions when there is total vision failure. Importantly, we also demonstrate that models trained with one locomotion pattern (human walking) can be applied to another (quadruped robot trotting) without retraining.
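To illustrate the error model referenced above (this is the standard textbook IMU model, not the paper's network), white measurement noise plus a random-walk bias can be simulated, showing how the bias term dominates drift under naive double integration:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 1000            # 100 Hz IMU, 10 s window
true_accel = np.zeros(n)      # stationary platform for illustration

# Standard IMU error model: additive white noise plus a slowly
# drifting bias that evolves as a random walk, b_{k+1} = b_k + w_k.
noise = rng.normal(0.0, 0.02, n)
bias = np.cumsum(rng.normal(0.0, 0.001, n))
measured = true_accel + bias + noise

# Naive double integration of raw measurements accumulates drift,
vel_raw = np.cumsum(measured) * dt
pos_raw = np.cumsum(vel_raw) * dt

# while subtracting an (ideally learned) bias estimate suppresses
# exactly the component a bias-predicting network targets.
vel_comp = np.cumsum(measured - bias) * dt
pos_comp = np.cumsum(vel_comp) * dt
```

The gap between the two trajectories is precisely the double-integrated bias, which is why predicting bias evolution, rather than the full motion model, can generalize across locomotion patterns.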
Accurate localization is a core component of a robot navigation system. To this end, Global Navigation Satellite Systems (GNSS) can provide absolute measurements outdoors and therefore eliminate long-term drift. However, fusing GNSS data with other sensor data is not trivial, especially when a robot moves between areas with and without sky view. We propose a robust approach that tightly fuses raw GNSS receiver data with inertial measurements and, optionally, lidar observations for precise and smooth mobile robot localization. A factor graph with two types of GNSS factors is proposed. First, factors based on pseudoranges, which allow for global positioning on Earth. Second, factors based on carrier phases, which enable highly accurate relative positioning, useful when other sensing modalities are challenged. Unlike traditional differential GNSS, this approach does not require a connection to a base station. On a public urban driving dataset, our approach achieves accuracy comparable to a state-of-the-art algorithm that fuses visual-inertial odometry with GNSS data - despite our approach not using a camera, only inertial and GNSS data. We also demonstrate the robustness of our approach using data from a car and a quadruped robot moving in environments such as forests. Accuracy in the global Earth frame remains at 1-2 m, while the estimated trajectories are smooth and free of discontinuities. We also show how lidar measurements can be tightly integrated. We believe this is the first system that fuses raw GNSS observations (as opposed to fixes) with lidar.
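For context (a generic illustration, not the paper's factor-graph implementation), a pseudorange constrains the receiver position p and clock bias b through the model rho_i = ||s_i - p|| + b, where s_i is the satellite position. A minimal Gauss-Newton solver over this model, assuming synthetic noise-free pseudoranges, might look like:

```python
import numpy as np

def solve_position(sat_pos, pseudoranges, iters=10):
    """Estimate receiver position and clock bias from pseudoranges.

    Model: rho_i = ||s_i - p|| + b, with s_i the satellite position,
    p the receiver position and b the clock bias in meters.
    Solved by Gauss-Newton iterations from the origin.
    """
    x = np.zeros(4)  # state: [px, py, pz, b]
    for _ in range(iters):
        diff = sat_pos - x[:3]               # N x 3 line-of-sight offsets
        dist = np.linalg.norm(diff, axis=1)  # geometric ranges
        pred = dist + x[3]                   # predicted pseudoranges
        # Jacobian: unit vectors from satellites, plus 1 for clock bias.
        H = np.hstack([-diff / dist[:, None], np.ones((len(dist), 1))])
        dx, *_ = np.linalg.lstsq(H, pseudoranges - pred, rcond=None)
        x += dx
    return x
```

In a factor graph, each such residual becomes one pseudorange factor, jointly optimized with inertial and carrier-phase factors rather than solved in isolation.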
The concept of Label Distribution Learning (LDL) is a technique for stabilizing classification and regression problems with ambiguous and/or imbalanced labels. A prototypical use case of LDL is human age estimation based on profile images. For this regression problem, a so-called Deep Label Distribution Learning (DLDL) method has been developed. The main idea is the joint regression of the label distribution and its expectation value. However, the original DLDL method uses loss components with different mathematical motivations, and therefore different scales, which is why a hyperparameter has to be introduced. In this work, we introduce a loss function for DLDL whose components are defined entirely by Kullback-Leibler (KL) divergences and are therefore directly comparable with each other without additional hyperparameters. It generalizes the concept of DLDL with regard to further use cases, in particular for multidimensional or multi-scale distribution learning tasks.
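To make the setting concrete (a minimal sketch of a single KL loss component over a discretized label distribution, not the paper's exact formulation), age estimation as LDL compares a predicted distribution over ages against a Gaussian target centered on the ground-truth label:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions over the same label support."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))

def target_distribution(ages, true_age, sigma=2.0):
    """Discretized Gaussian around the ground-truth label, a common LDL target."""
    d = np.exp(-0.5 * ((ages - true_age) / sigma) ** 2)
    return d / d.sum()

ages = np.arange(0, 101)                  # label support: ages 0..100
target = target_distribution(ages, 30.0)
pred = target_distribution(ages, 32.0)    # a slightly-off prediction
loss = kl_divergence(target, pred)        # dimensionless, scale-free component
```

Because every component is a KL divergence over the same support, terms can be summed directly, which is the property that removes the need for a balancing hyperparameter.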
This note serves three purposes: (i) we provide a self-contained exposition of the fact that conjunctive queries are not efficiently learnable in the Probably Approximately Correct (PAC) model, paying explicit attention to the complicating fact that this concept class lacks the polynomial-size fitting property, a property that is tacitly assumed in much of the computational learning theory literature; (ii) we establish a strong negative PAC learnability result that applies to many restricted classes of conjunctive queries (CQs), including acyclic CQs for a wide range of notions of acyclicity; (iii) we show that CQs are efficiently PAC learnable with membership queries.
Simultaneous Localization and Mapping (SLAM) is being deployed in real-world applications, yet many state-of-the-art solutions still struggle in many common scenarios. A key necessity for progress in SLAM research is the availability of high-quality datasets and fair and transparent benchmarking. To this end, we created the Hilti-Oxford Dataset, to push state-of-the-art SLAM systems to their limits. The dataset covers a variety of challenges, ranging from sparse and regular construction sites to a 17th-century neoclassical building with fine details and curved surfaces. To encourage multi-modal SLAM approaches, we designed a data collection platform featuring a lidar, five cameras, and an IMU (Inertial Measurement Unit). For benchmarking SLAM algorithms on tasks where accuracy and robustness are paramount, we implemented a novel ground-truth collection method that enables our dataset to measure SLAM pose errors with millimeter accuracy. To further ensure accuracy, the extrinsics of our platform were verified with a micrometer-accurate scanner, and temporal calibration was managed online using hardware time synchronization. The multi-modality and diversity of our dataset attracted a large field of academic and industrial researchers to enter the second edition of the Hilti SLAM Challenge, which concluded in June 2022. The results of the challenge show that while the top three teams could achieve an accuracy of 2 cm or better on some sequences, performance dropped off on the more difficult sequences.
Safe motion planning in robotics requires planning into spaces that have been verified to be free of obstacles. However, obtaining such environment representations using lidar is challenging due to the sparsity of its depth measurements. We present a learning-aided 3D lidar reconstruction framework that upsamples sparse lidar depth measurements with the aid of overlapping camera images, so as to generate denser reconstructions with more definitively identified free space than can be achieved using the raw lidar measurements alone. We use a neural network with an encoder-decoder structure to predict dense depth images, along with depth uncertainty estimates, which are fused using a volumetric mapping system. We conduct experiments on real-world outdoor datasets captured using a handheld sensing device and a legged robot. Using input data from a 16-beam lidar mapping a building network, our experiments show that the amount of estimated free space is increased by more than 40% with our approach. We also show that our method, trained on a synthetic dataset, generalises well to real-world outdoor scenes without additional fine-tuning. Finally, we demonstrate how motion planning tasks can benefit from these denser reconstructions.
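As a side note (a generic technique, not necessarily the paper's fusion rule), a common way to fuse depth estimates that come with uncertainty values, as the predicted dense depth images above do, is per-pixel inverse-variance weighting:

```python
import numpy as np

def fuse_depths(depths, sigmas):
    """Inverse-variance fusion of per-pixel depth estimates.

    depths, sigmas: arrays of shape (K, H, W) holding K depth observations
    per pixel and their standard deviations. Returns the fused depth map
    and its standard deviation, each of shape (H, W).
    """
    w = 1.0 / np.square(sigmas)                       # weight = 1 / variance
    fused = np.sum(w * depths, axis=0) / np.sum(w, axis=0)
    fused_sigma = 1.0 / np.sqrt(np.sum(w, axis=0))    # fused uncertainty
    return fused, fused_sigma
```

Weighting by predicted uncertainty lets confident network outputs dominate while uncertain ones contribute little, which is what makes the fused volumetric map safe to plan in.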